Subspace distillation for continual learning

Authors

Abstract

An ultimate objective in continual learning is to preserve the knowledge learned in preceding tasks while learning new tasks. To mitigate forgetting prior knowledge, we propose a novel distillation technique that takes into account the manifold structure of the latent/output space of a neural network. To achieve this, we approximate the data manifold up to its first order, hence benefiting from linear subspaces to model the structure and maintain the concepts. We demonstrate that modeling with subspaces provides several intriguing properties, including robustness to noise, and is therefore effective for mitigating Catastrophic Forgetting in continual learning. We also discuss and show how our proposed method can be adopted to address both classification and segmentation problems. Empirically, we observe that our method outperforms various methods on challenging datasets, including Pascal VOC and Tiny-Imagenet. Furthermore, it can be seamlessly combined with existing approaches to improve their performances. The code for this article will be available at https://github.com/csiro-robotics/SDCL.
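To make the idea in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of what a subspace-based distillation term could look like: the latent space of each batch is approximated to first order by its top-k principal directions, and the distance between the student's and the frozen teacher's subspaces is penalised. The function names, the subspace dimension k, and the projection-metric distance are assumptions made here for illustration, not the authors' released implementation (see the linked repository for that).

```python
# Hypothetical sketch of a subspace-based distillation term (not the official code).
import torch

def principal_subspace(features: torch.Tensor, k: int = 8) -> torch.Tensor:
    """First-order approximation of the manifold of a feature batch (n x d):
    an orthonormal basis (d x k) spanning its top-k principal directions."""
    centered = features - features.mean(dim=0, keepdim=True)
    _, _, vh = torch.linalg.svd(centered, full_matrices=False)  # requires k <= min(n, d)
    return vh[:k].T

def subspace_distillation_loss(student_feats: torch.Tensor,
                               teacher_feats: torch.Tensor,
                               k: int = 8) -> torch.Tensor:
    """Squared projection-metric distance between the student's and the frozen
    teacher's latent subspaces, computed via projection matrices P = U U^T."""
    u_s = principal_subspace(student_feats, k)
    u_t = principal_subspace(teacher_feats.detach(), k)
    p_s, p_t = u_s @ u_s.T, u_t @ u_t.T
    return 0.5 * (p_s - p_t).pow(2).sum()

# Usage sketch: while training on a new task, add this term (weighted by some
# coefficient lam) to the task loss, where old_model is a frozen snapshot taken
# before the new task:
#   loss = task_loss + lam * subspace_distillation_loss(new_model.features(x),
#                                                       old_model.features(x))
```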


Similar resources

Continual Learning for Mobile Robots

Autonomous mobile robots should be able to learn incrementally and adapt to changes in the operating environment during their entire lifetime. This is referred to as continual learning. In this thesis, I propose an approach to continual learning which is based on adaptive state-space quantisation and reinforcement learning. Representational tools for continual learning should be constructive, a...

Variational Continual Learning

This paper develops variational continual learning (VCL), a simple but general framework for continual learning that fuses online variational inference (VI) and recent advances in Monte Carlo VI for neural networks. The framework can successfully train both deep discriminative models and deep generative models in complex continual learning settings where existing tasks evolve over time and enti...
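For readers unfamiliar with VCL, the following is a minimal sketch of its per-task objective under the usual mean-field Gaussian assumption: the posterior learned on the previous task acts as the prior for the current one, so the loss is an expected negative log-likelihood plus a KL term between the two posteriors. The class name, layer sizes, and one-sample Monte Carlo estimate are illustrative assumptions, not the paper's reference implementation.

```python
# Illustrative sketch of the VCL objective with a single mean-field Bayesian linear layer.
import torch
import torch.nn.functional as F

class MeanFieldLinear(torch.nn.Module):
    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.mu = torch.nn.Parameter(torch.zeros(d_out, d_in))
        self.logvar = torch.nn.Parameter(torch.full((d_out, d_in), -6.0))

    def forward(self, x):
        # One Monte Carlo sample of the weights via the reparameterisation trick.
        w = self.mu + torch.randn_like(self.mu) * (0.5 * self.logvar).exp()
        return x @ w.T

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians, summed over dims."""
    return 0.5 * (logvar_p - logvar_q
                  + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                  - 1.0).sum()

def vcl_loss(layer, prev_mu, prev_logvar, x, y, n_task_data):
    """Negative ELBO for the current task: the previous task's posterior acts as the prior."""
    nll = F.cross_entropy(layer(x), y, reduction="mean")
    kl = kl_diag_gaussians(layer.mu, layer.logvar, prev_mu, prev_logvar)
    return nll + kl / n_task_data
```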

Unsupervised Learning for Information Distillation

Current document archives are enormously large and constantly increasing and that makes it practically impossible to make use of them efficiently. To analyze and interpret large volumes of speech and text of these archives in multiple languages and produce structured information of interest to its user, information distillation techniques are used. In order to access the key information in resp...

Scalable Recollections for Continual Lifelong Learning

Given the recent success of Deep Learning applied to a variety of single tasks, it is natural to consider more human-realistic settings. Perhaps the most difficult of these settings is that of continual lifelong learning, where the model must learn online over a continuous stream of non-stationary data. A continual lifelong learning system must have three primary capabilities to succeed: it mus...

Episodic memory for continual model learning

Both the human brain and artificial learning agents operating in real-world or comparably complex environments are faced with the challenge of online model selection. In principle this challenge can be overcome: hierarchical Bayesian inference provides a principled method for model selection and it converges on the same posterior for both off-line (i.e. batch) and online learning. However, main...


Journal

Journal title: Neural Networks

Year: 2023

ISSN: 1879-2782, 0893-6080

DOI: https://doi.org/10.1016/j.neunet.2023.07.047